Second-level Cache Organization for Data Prefetching

Authors

  • Sunil Kim
  • Alexander V. Veidenbaum
Abstract

This paper studies hardware prefetching for second-level (L2) caches. Previous work on prefetching has been extensive but largely directed at primary caches. In some cases only L2 prefetching is possible or is more appropriate. We concentrate on stride-directed prefetching and study stream buffers and L2 cache prefetching. We show that previously proposed stride-directed organizations and prefetching algorithms do not work as well in L2 caches, and we describe a new stride-detection mechanism. We study an L2 cache prefetching organization which combines a small L2 cache with our stride-directed prefetching algorithm. We compare our system with stream buffer prefetching and other stride-detection algorithms. Our simulation results show this system to perform significantly better than stream buffer prefetching or a larger non-prefetching L2 cache, without suffering a significant increase in memory traffic.
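
For readers unfamiliar with stride-directed prefetching, the sketch below shows the general style of PC-indexed stride-detection table used in earlier primary-cache prefetchers. It is an illustrative assumption for exposition only, not the paper's L2 mechanism; the table size, block size, and two-miss confirmation rule are hypothetical choices.

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    #define TABLE_ENTRIES 64   /* hypothetical table size */
    #define BLOCK_SIZE    64   /* assumed L2 block size in bytes */

    typedef struct {
        uint64_t tag;        /* PC of the load that owns this entry */
        uint64_t last_addr;  /* last miss address seen for that load */
        int64_t  stride;     /* most recently observed stride */
        int      confirmed;  /* same non-zero stride seen twice in a row */
    } StrideEntry;

    static StrideEntry table[TABLE_ENTRIES];

    /* On a miss by instruction `pc` to address `addr`, return an address to
     * prefetch, or 0 if no stable stride has been detected yet. */
    static uint64_t stride_prefetch(uint64_t pc, uint64_t addr)
    {
        StrideEntry *e = &table[pc % TABLE_ENTRIES];

        if (e->tag != pc) {              /* new load: (re)allocate the entry */
            e->tag = pc;
            e->last_addr = addr;
            e->stride = 0;
            e->confirmed = 0;
            return 0;
        }

        int64_t stride = (int64_t)(addr - e->last_addr);
        e->confirmed = (stride != 0 && stride == e->stride);
        e->stride = stride;
        e->last_addr = addr;

        return e->confirmed ? addr + (uint64_t)stride : 0;
    }

    int main(void)
    {
        /* One load missing with a constant stride of two blocks: prefetches
         * start once the same stride is seen on two consecutive misses. */
        uint64_t pc = 0x400100;
        for (int i = 0; i < 8; i++) {
            uint64_t addr = 0x10000 + (uint64_t)i * 2 * BLOCK_SIZE;
            uint64_t pf = stride_prefetch(pc, addr);
            if (pf)
                printf("miss 0x%" PRIx64 " -> prefetch 0x%" PRIx64 "\n", addr, pf);
        }
        return 0;
    }

A prefetch is issued only after the same non-zero stride is observed on two consecutive misses, which is what keeps such a table from flooding memory with useless requests; the paper's contribution is a new detection mechanism that works better at the L2 level.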

Similar articles

The Impact of Timeliness for Hardware-based Prefetching from Main Memory

Among the techniques to hide or tolerate memory latency, data prefetching has been shown to be quite effective. However, its effectiveness is often limited to prefetching into the first-level cache. With more aggressive architectural parameters in current and future processors, prefetching from main memory to the second-level (L2) cache becomes increasingly important. In this paper, we exami...
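
As a rough illustration of the timeliness issue, the sketch below classifies a prefetch as timely or late from its issue cycle, the cycle of the subsequent demand access, and an assumed fixed memory latency; the latency value and event format are hypothetical and not taken from the cited work.

    #include <stdio.h>

    #define MEM_LATENCY 200   /* assumed L2-to-memory round trip, in cycles */

    typedef struct {
        long issue_cycle;    /* cycle at which the prefetch left the L2 */
        long demand_cycle;   /* cycle at which the demand access reached the L2 */
    } PrefetchEvent;

    /* A prefetch is timely if its data arrives before the demand access;
     * otherwise it is late and hides only part of the memory latency. */
    static const char *classify(PrefetchEvent e)
    {
        long ready = e.issue_cycle + MEM_LATENCY;
        return (e.demand_cycle >= ready) ? "timely" : "late";
    }

    int main(void)
    {
        PrefetchEvent events[] = {
            { 1000, 1250 },  /* issued 250 cycles ahead of the demand access */
            { 1000, 1100 },  /* issued only 100 cycles ahead */
        };
        for (int i = 0; i < 2; i++)
            printf("prefetch %d is %s\n", i, classify(events[i]));
        return 0;
    }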

ASEP: An Adaptive Sequential Prefetching Scheme for Second-level Storage System

In modern storage systems, a multilevel buffer cache hierarchy is widely used to improve the I/O performance of disks. In this hierarchy, referenced pages in the second-level buffer cache have a larger reuse distance, that is, the number of accesses between two references to the same block in a reference sequence. These reuse distances are close in value to their lifetime, the time they are conserve...
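
To make the reuse-distance definition concrete, the sketch below computes it over a toy block-reference trace, counting the accesses strictly between two references to the same block; the trace and the simple array bookkeeping are illustrative assumptions, not part of ASEP.

    #include <stdio.h>

    #define MAX_BLOCKS 1024   /* hypothetical bound on block numbers in the trace */

    int main(void)
    {
        /* Toy reference sequence of block numbers. */
        int trace[] = { 1, 2, 3, 1, 4, 2, 2 };
        int n = sizeof(trace) / sizeof(trace[0]);

        /* Index of the previous reference to each block, or -1 if unseen. */
        long last_seen[MAX_BLOCKS];
        for (int b = 0; b < MAX_BLOCKS; b++)
            last_seen[b] = -1;

        for (int i = 0; i < n; i++) {
            int b = trace[i];
            if (last_seen[b] >= 0)
                /* accesses strictly between two references to the same block */
                printf("block %d: reuse distance %ld\n", b, i - last_seen[b] - 1);
            else
                printf("block %d: first reference\n", b);
            last_seen[b] = i;
        }
        return 0;
    }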

ECE1718 Project Final Report: Improving Data Locality During Thread-Level Speculation

Locality conflict is a major problem during thread-level speculation (TLS). This paper addresses three potential techniques for reducing data cache misses, namely universal prefetching, ORB prefetching, and prefetching on speculative violation. Universal prefetching works by prefetching clean cache lines from the unified cache to all data caches when one of the data caches suffers a speculative r...

Improving L2 Cache Performance through Stream-Directed Optimizations

Research on caches has traditionally concentrated on the L1 cache. Most of the improvements in the design of L2 caches have been rather simple: increases in size and associativity. We believe that the L2 offers some unique opportunities for improvement. First, the L2 does not lie on the critical path and can be made more complex. Second, the L1 filters out many temporal references to data block...

Two-level Data Prefetching

Data prefetching has been shown to be an effective tool in hiding part of the latency associated with cache misses in modern processors. Traditionally, data prefetchers fetch data into a small prefetch buffer near the L1 for low latency, or into the L2 cache for greater coverage and less cache pollution. However, with the L1–L2 cache speed gap growing, significant performance gains can be obtained i...

Publication date: 2007